fix: default OpenAI provider on /openai/v1/models converter (#2993) #3007
thiscantbeserious wants to merge 1 commit into maximhq:main
Conversation
The RequestConverter for the /models route did not set Provider on the BifrostRequest. The dispatch layer treated an empty Provider as "list all", calling ListAllModels and fanning out to every configured provider (including Anthropic, Bedrock, etc.). Each non-OpenAI provider rejected the OpenAI token with its own 401, and the aggregated error surfaced to the caller with a non-OpenAI provider name and message.

Fix: add a PreCallback that detects the Azure OpenAI SDK User-Agent and stores the flag in BifrostContext. The RequestConverter then reads that flag to default Provider to schemas.Azure for Azure SDK callers, or schemas.OpenAI otherwise. This matches the pattern used by other routes in the same file (e.g. /v1/realtime, /v1/responses). Callers that already set Provider explicitly are unaffected.

Closes maximhq#2993
Extracted from maximhq#2775
This fix for maximhq#2993 was originally included in this feature branch as a side effect. It has been extracted into a focused PR maximhq#3007 against main. Reverting the changes here so the feature branch no longer overlaps.
Confidence Score: 5/5

Safe to merge — the fix is narrowly scoped, follows established patterns in the same file, and is covered by 8 new tests. No P0 or P1 findings. The logic is simple and correct: the provider default only fires when Provider is empty, the Azure flag is stored/read via the existing BifrostContextKeyIsAzureUserAgent mechanism, and the closures in the loop do not capture the loop variable. The fix mirrors the /realtime and /responses patterns used throughout the file. No files require special attention.
🧹 Nitpick comments (1)

transports/bifrost-http/integrations/openai_list_models_test.go

Lines 87-101: Add one more passthrough test with Azure UA (optional). Consider a variant of this test that uses the Azure SDK User-Agent and an explicit non-empty provider, to lock in "explicit provider wins" regardless of UA tagging: mirror TestListModelsRequestConverter_RespectsExplicitProvider, call findListModelsRoute and applyPreCallback the same way, build the request via makeListModelsReq with a non-empty explicit provider (e.g. schemas.Azure), invoke route.RequestConverter, then assert no error and that bifrostReq.ListModelsRequest.Provider still equals the explicit provider.
Summary
GET /openai/v1/models was fanning out to every configured provider because the RequestConverter did not set Provider on the BifrostRequest. The dispatch layer treats an empty Provider as "list all" and calls ListAllModels, which queries every provider. Each non-OpenAI provider rejected the caller's OpenAI token, and the aggregated error surfaced to the caller with a non-OpenAI provider name and message. The fix adds a PreCallback to detect the Azure OpenAI SDK User-Agent, then defaults Provider to schemas.Azure (Azure SDK) or schemas.OpenAI (all other callers) in the RequestConverter.

Changes
- transports/bifrost-http/integrations/openai.go: added PreCallback and updated RequestConverter in CreateOpenAIListModelsRouteConfigs. The pattern mirrors the existing /v1/realtime and /v1/responses routes in the same file.
- transports/bifrost-http/integrations/openai_list_models_test.go: 8 new tests covering the converter, the PreCallback, the Azure SDK path, explicit-provider passthrough, and invalid-request error handling.

Type of change
Affected areas
How to test
Manual smoke test (two providers configured):
Before this fix the response contained key_statuses entries for every provider. After the fix only the OpenAI provider is queried.

Screenshots/Recordings
N/A (transport-layer Go change only).
Breaking changes
Related issues
Closes #2993
Discovered and originally fixed as a side-effect of PR #2775 (thiscantbeserious/bifrost). Extracted here as a focused change against main.
Security considerations
No auth, secrets, or PII involved. The change narrows the blast radius of a list-models call by scoping it to the intended provider.
Checklist
Read docs/contributing/README.md and followed the guidelines